Cross-convolutional-layer Pooling for Image Recognition
Recent studies have shown that a Deep Convolutional Neural Network (DCNN)
pretrained on a large image dataset can be used as a universal image
descriptor, and that doing so leads to impressive performance for a variety of
image classification tasks. Most of these studies adopt activations from a
single DCNN layer, usually the fully-connected layer, as the image
representation. In this paper, we propose a novel way to extract image
representations from two consecutive convolutional layers: one layer is
utilized for local feature extraction and the other serves as guidance to pool
the extracted features. By taking different viewpoints of convolutional layers,
we further develop two schemes to realize this idea. The first one directly
uses convolutional layers from a DCNN. The second one applies the pretrained
CNN on densely sampled image regions and treats the fully-connected activations
of each image region as convolutional feature activations. We then train
another convolutional layer on top of that as the pooling-guidance
convolutional layer. By applying our method to three popular visual
classification tasks, we find that the first scheme tends to perform better on
applications that require strong discrimination of subtle object patterns
within small regions, while the second excels in cases that require
discrimination of category-level patterns. Overall, the proposed method
achieves superior performance over existing ways of extracting image
representations from a DCNN.
Comment: Fixed typos. Journal extension of arXiv:1411.7466. Accepted to IEEE
Transactions on Pattern Analysis and Machine Intelligence
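The core operation described above, one convolutional layer supplying local features and the next supplying pooling weights, can be sketched as follows. This is a minimal NumPy illustration of cross-layer weighted pooling, not the authors' implementation; the array shapes and function name are assumptions.

```python
import numpy as np

def cross_layer_pooling(feat, guide):
    """Pool local features `feat` (H, W, D) using guidance activations
    `guide` (H, W, C): each guidance channel acts as a spatial weighting
    over the local features, and the C weighted sums are concatenated
    into a single image representation of length D * C."""
    H, W, D = feat.shape
    C = guide.shape[2]
    f = feat.reshape(H * W, D)    # one local feature per spatial position
    g = guide.reshape(H * W, C)   # one pooling-weight map per guidance channel
    pooled = g.T @ f              # (C, D): channel-wise weighted sums
    return pooled.reshape(-1)     # concatenate into a D * C vector

rng = np.random.default_rng(0)
feat = rng.random((7, 7, 64))    # activations of the feature-extraction layer
guide = rng.random((7, 7, 32))   # activations of the pooling-guidance layer
rep = cross_layer_pooling(feat, guide)
print(rep.shape)  # (2048,)
```

The second scheme in the abstract changes only where these two arrays come from: `feat` holds fully-connected activations of densely sampled regions and `guide` comes from an extra convolutional layer trained on top of them.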
Towards Effective Low-bitwidth Convolutional Neural Networks
This paper tackles the problem of training a deep convolutional neural
network with both low-precision weights and low-bitwidth activations.
Optimizing a low-precision network is very challenging since the training
process can easily get trapped in a poor local minimum, resulting in
substantial accuracy loss. To mitigate this problem, we propose three
simple yet effective approaches to improve network training. First, we
propose a two-stage optimization strategy to progressively find good
local minima. Specifically, we first optimize a network with quantized
weights and only then quantize its activations. This is in contrast to the traditional
methods, which optimize them simultaneously. Second, in a similar spirit to
the first method, we propose another progressive optimization approach that
progressively decreases the bit-width from high-precision to low-precision
during the course of training. Third, we adopt a novel learning scheme to
jointly train a full-precision model alongside the low-precision one. By doing
so, the full-precision model provides hints to guide the low-precision model
training. Extensive experiments on various datasets (i.e., CIFAR-100 and
ImageNet) show the effectiveness of the proposed methods. To highlight, using
our methods to train a 4-bit precision network leads to no performance decrease
in comparison with its full-precision counterpart with standard network
architectures (i.e., AlexNet and ResNet-50).
Comment: 11 pages
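The two-stage idea can be sketched with a uniform quantizer whose weight and activation precision are enabled separately. This is a forward-only toy, assuming uniform quantization over a clipped range; the function names, the range choices, and the omission of the straight-through gradient are all simplifications, not the paper's method.

```python
import numpy as np

def quantize(x, bits, lo=0.0, hi=1.0):
    """Uniform quantization of values clipped to [lo, hi] onto
    2**bits - 1 intervals (training-time gradients are omitted)."""
    levels = 2 ** bits - 1
    x = (np.clip(x, lo, hi) - lo) / (hi - lo)
    return np.round(x * levels) / levels * (hi - lo) + lo

def forward(x, w, w_bits=None, a_bits=None):
    """One linear + ReLU layer whose weight / activation precision can
    be toggled independently, mirroring the two-stage schedule:
    stage 1 trains with w_bits set and a_bits=None; stage 2 resumes
    from those weights with a_bits enabled as well."""
    wq = quantize(w, w_bits, -1.0, 1.0) if w_bits is not None else w
    a = np.maximum(x @ wq, 0.0)  # ReLU activations in [0, inf)
    return quantize(a, a_bits) if a_bits is not None else a
```

The third approach from the abstract would add a second, full-precision copy of the network whose intermediate outputs serve as regression targets ("hints") for the low-precision one; the bit-width-decay variant simply calls the same training loop with a decreasing `bits` schedule (e.g. 8, then 6, then 4).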
You Can Generate It Again: Data-to-text Generation with Verification and Correction Prompting
Despite significant advancements in existing models, generating text
descriptions from structured data input, known as data-to-text generation,
remains a challenging task. In this paper, we propose a novel approach that
goes beyond traditional one-shot generation methods by introducing a multi-step
process consisting of generation, verification, and correction stages. Our
approach, VCP (Verification and Correction Prompting), begins with the model
generating an initial output. We then proceed to verify the correctness of
different aspects of the generated text. The observations from the verification
step are converted into a specialized error-indication prompt, which instructs
the model to regenerate the output while considering the identified errors. To
enhance the model's correction ability, we have developed a carefully designed
training procedure. This procedure enables the model to incorporate feedback
from the error-indication prompt, resulting in improved output generation.
Through experimental results, we demonstrate that our approach effectively
reduces slot error rates while maintaining the overall quality of the generated
text.
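The generate-verify-correct loop above can be sketched as follows. Everything here is hypothetical scaffolding in the spirit of VCP, not the paper's actual models or prompts: `toy_model` stands in for the generator, and `verify` is a trivial slot-coverage check standing in for the paper's verification step.

```python
def verify(text, data):
    """Toy verifier: report any slot value that is absent from the text."""
    return [f"missing {k}={v}" for k, v in data.items() if v not in text]

def generate_with_vcp(model, data, max_rounds=3):
    """Generate, verify, then regenerate with an error-indication prompt
    until the verifier is satisfied or the round budget runs out."""
    text = model(data, prompt="")
    for _ in range(max_rounds):
        errors = verify(text, data)
        if not errors:
            break
        prompt = "Errors found: " + "; ".join(errors)  # error-indication prompt
        text = model(data, prompt=prompt)              # regenerate with feedback
    return text

def toy_model(data, prompt):
    # Stand-in generator: drops one slot at first, fixes it when prompted.
    if not prompt:
        return f"{data['name']} is a restaurant."
    return f"{data['name']} is a {data['food']} restaurant."

out = generate_with_vcp(toy_model, {"name": "Aromi", "food": "Italian"})
print(out)
```

The training procedure described in the abstract would teach the real model to use the error-indication prompt the way `toy_model` does here by construction.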
A scalable unsupervised feature merging approach to efficient dimensionality reduction of high-dimensional visual data
To achieve a good trade-off between recognition accuracy and computational efficiency, it is often necessary to reduce high-dimensional visual data to medium-dimensional data. For this task, even applying a simple full-matrix linear projection incurs significant computation and memory use, and when the amount of visual data is large, efficiently learning such a projection itself becomes a problem. The recent feature merging approach offers an efficient way to reduce dimensionality, requiring only a single scan of the features to perform the reduction. However, existing merging algorithms do not scale well to high-dimensional data, especially in the unsupervised case. To address this problem, we formulate unsupervised feature merging as a PCA problem with a special structure constraint. By exploiting its connection with k-means, we transform this constrained PCA problem into a feature clustering problem. Moreover, we employ hashing to improve its scalability. Together, these yield a scalable feature merging algorithm for our dimensionality reduction task. In addition, we develop an extension that leverages the neighborhood structure in the data to further improve dimensionality reduction performance. We also explore incorporating bipolar merging, a variant of the merging function that allows subtraction, into our algorithms. Through three applications in visual recognition, we demonstrate that our methods not only achieve good dimensionality reduction performance at little computational cost but also help create more powerful representations at both the image level and the local feature level.
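The clustering view of feature merging can be sketched as: treat each of the d feature dimensions as a point (its values across the n samples), cluster those points, and merge each cluster of dimensions by summation. This is a minimal sketch with plain k-means, omitting the paper's structure-constrained PCA derivation, the hashing speed-up, and bipolar merging; all names are assumptions.

```python
import numpy as np

def feature_merging(X, k, iters=20, seed=0):
    """Unsupervised feature merging sketch. X is (n, d); the d feature
    dimensions are clustered with k-means (each dimension is a point in
    R^n), then the dimensions inside each cluster are merged by
    summation, producing an (n, k) reduced representation."""
    rng = np.random.default_rng(seed)
    dims = X.T                                            # (d, n): one point per dimension
    centers = dims[rng.choice(len(dims), k, replace=False)]
    for _ in range(iters):
        d2 = ((dims[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d2.argmin(1)                             # nearest-center assignment
        for c in range(k):
            if np.any(assign == c):
                centers[c] = dims[assign == c].mean(0)
    # Merging step: sum the original dimensions inside each cluster.
    return np.stack([X[:, assign == c].sum(1) for c in range(k)], axis=1)

X = np.random.default_rng(1).random((100, 64))
merged = feature_merging(X, 8)
print(merged.shape)  # (100, 8)
```

Because merging only sums columns, the "projection" is a sparse 0/1 matrix and the reduction needs a single pass over the features, which is the efficiency argument made in the abstract.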
Mid-level Deep Pattern Mining
Mid-level visual element discovery aims to find clusters of image patches
that are both representative and discriminative. In this work, we study this
problem from the perspective of pattern mining while relying on the recently
popularized Convolutional Neural Networks (CNNs). Specifically, we find that
for an image patch, activations extracted from the first fully-connected layer
of CNNs have two appealing properties that enable their seamless integration
with pattern mining. Patterns are then discovered from a large number of CNN
activations of image patches through the well-known association rule mining.
When we retrieve and visualize image patches with the same pattern,
surprisingly, they are not only visually similar but also semantically
consistent. We apply our approach to scene and object classification tasks, and
demonstrate that our approach outperforms all previous works on mid-level
visual element discovery by a sizeable margin with far fewer elements being
used. Our approach also outperforms or matches recent works using CNN for these
tasks. Source code of the complete system is available online.
Comment: Published in Proc. IEEE Conf. Computer Vision and Pattern Recognition
201
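The transaction view that links CNN activations to association rule mining can be sketched as: each image patch becomes a "transaction" holding the indices of its largest activations, and frequently co-occurring index sets are the mined patterns. This toy counts pairs by brute force rather than running a real Apriori miner, and all names and thresholds are assumptions.

```python
from collections import Counter
from itertools import combinations
import numpy as np

def mine_patterns(acts, k=3, size=2, min_support=0.5):
    """Each row of `acts` (one patch's CNN activations) is turned into a
    transaction of its k largest activation indices; itemsets of the
    given size whose support exceeds min_support are returned."""
    transactions = [frozenset(np.argsort(a)[-k:]) for a in acts]
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

# Synthetic patches where dimensions 0 and 1 always fire strongly,
# so the pair (0, 1) should emerge as a frequent pattern.
rng = np.random.default_rng(2)
acts = rng.random((50, 10)) * 0.1
acts[:, :2] += 1.0
patterns = mine_patterns(acts)
print(patterns[(0, 1)])  # support of the pair (0, 1)
```

Patches sharing a mined pattern are the candidate mid-level elements; the abstract's observation is that, with CNN activations as items, such groups turn out to be both visually and semantically consistent.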